QA & Testing · Phase 05 of 05

Turn a feature into tests, tags, and a repeatable workflow.

The attached QA workshop deck is not about the manual validation process. It is about building an AI-assisted testing loop: write a strong prompt, generate meaningful tests, commit them with the feature, and let the pipeline produce requirement-tagged proof automatically.

GitHub Copilot · Claude Code · Jest · REQ Tags · Hours Tracking POC

Manual testing is slow to execute, slow to prove, and easy to break under audit pressure.

The deck opens with a blunt case for change: in a regulated environment, manual testing creates timing problems and evidence problems at the same time.

Pain Point 01

Regression takes weeks

Every code change can trigger a large manual rerun. The cost is not just time spent clicking through flows. It is the queue created before a release can move.

The workshop treats this as structural waste, not a staffing issue.

Pain Point 02

Documentation is repetitive and error-prone

IQ/OQ-style evidence and validation notes assembled by hand drift away from the actual code and the actual run results. The more manual the package, the easier it is to miss something.

The same evidence should be generated by the run, not reconstructed after it.

Pain Point 03

Traceability depends on manual bookkeeping

Linking requirements to tests through spreadsheets or after-the-fact mapping creates obvious failure points. The deck’s alternative is requirement tags embedded directly in test names and outputs.

Traceability should be part of the test artifact, not a second document somebody updates later.

Pain Point 04

PRs ship without a real safety net

Without automated gates, code can merge before tests actually prove the feature behavior. In the workshop model, no test means no merge.

The merge gate is the enforcement mechanism, not a team norm written on a slide.

The AI-assisted pattern is straightforward: prompt, generate, commit, run, report.

The deck’s core workflow is a five-step chain from feature context to audit-ready PR output.

Step 01

Write a test prompt

Name the feature, state the acceptance criteria, specify the test type, and call out the edge cases that matter.

Step 02

AI generates the test cases

Copilot or Claude Code produces runnable tests with the right structure, branch coverage targets, and requirement tags.

Step 03

Commit tests with the feature

The test file lands in the same PR as the code so the link between behavior and validation is preserved at review time.

Step 04

GitHub Actions runs the suite

The pipeline executes automatically on the PR, collects artifacts, and enforces pass/fail gates before merge. A minimal workflow sketch appears after this step list.

Step 05

Results attach back to the PR

Requirement-tagged outputs, coverage, and reports become versioned artifacts on the same change record reviewers are already using.

Prompt → Generated Tests → Pull Request → GitHub Actions → Test Reports
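
What the merge gate looks like in practice: the sketch below is a minimal GitHub Actions workflow, assuming a Node project where npm test runs Jest. The file name, Node version, and artifact path are illustrative assumptions, not details from the deck.

pr-tests.yml — illustrative sketch
name: PR Test Gate
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Jest exits non-zero on any failing test, which fails this job and,
      # once the check is marked required in branch protection, blocks the merge.
      - run: npm test -- --coverage
      # Upload coverage and reports so they become versioned artifacts on the PR run.
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-reports
          path: coverage/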

A good test prompt gives the model five things it cannot safely infer.

The workshop’s prompt pattern is practical rather than abstract. It is designed to produce useful tests on the first pass, not generic boilerplate.

Ingredient 01

Feature context

Name the feature and what it is supposed to do. In the workshop example, that is project-based time entry in the Hours Tracking proof of concept.

Ingredient 02

Acceptance criteria

Reference the real AC from the story or spec. The model needs to know the exact pass/fail behavior, not just a paraphrased summary.

Ingredient 03

Test type

Say whether you want unit, integration, end-to-end, or regression tests. Otherwise the model has to guess both level and output format.

Ingredient 04

Edge cases

Call out nulls, invalid values, boundaries, and permission checks explicitly. The deck keeps returning to negative, null, and limit conditions.

Ingredient 05

GxP traceability note

If the tests must support audit review, include the requirement tag in the prompt so the generated file carries that metadata from the start.

Workshop Example
Hours Tracking prompt pattern: project-based time entry, explicit acceptance criteria, edge cases for negative hours and daily totals above 24, and a requirement tag like [REQ-HT-012] embedded in every test.
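
Assembled, a prompt built from those five ingredients might read like the sketch below. The wording is illustrative, not the deck's verbatim text; only the feature, edge cases, and requirement tag come from the workshop example.

Example prompt — illustrative wording
Write Jest unit tests for project-based time entry in the Hours Tracking POC.
Acceptance criteria: a time entry requires a project code and an hours value, and a day's total may not exceed 24 hours. (Reference the real AC from the story here.)
Test type: unit tests for the validation logic.
Edge cases: negative hours, null project code, exactly 24 hours, daily totals above 24.
GxP traceability: tag the describe block and every test name with [REQ-HT-012].
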
Tips & Tricks

Pick one requirement-tag format and never vary it by team or feature. Stable tags such as [REQ-HT-012] are what make PR summaries, reports, and grep-based traceability actually usable later.
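
A stable tag format is what makes that traceability scriptable. The sketch below is a small Node script, assumed rather than taken from the deck, that walks a directory of test files and maps each requirement tag to the files carrying it; the file name and tag pattern are illustrative.

traceability-scan.js — illustrative sketch
const fs = require('fs');
const path = require('path');

// Matches stable tags of the form [REQ-HT-012].
const TAG = /\[REQ-[A-Z]+-\d+\]/g;
const tags = new Map();

// Recursively collect tags from every *.test.js file under dir.
function walk(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      walk(full);
    } else if (entry.name.endsWith('.test.js')) {
      const hits = fs.readFileSync(full, 'utf8').match(TAG) || [];
      for (const tag of hits) {
        if (!tags.has(tag)) tags.set(tag, new Set());
        tags.get(tag).add(full);
      }
    }
  }
}

// Usage: node traceability-scan.js src
walk(process.argv[2] || '.');
for (const [tag, files] of tags) {
  console.log(`${tag} → ${[...files].join(', ')}`);
}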

The target output is not generic test code. It is a file that is already traceable and ready to run.

The deck’s generated Jest example makes three expectations clear: requirement tags are built in, edge cases are included automatically, and the file should drop directly into the repo.

Output 01

Requirement tags in every test

Tags such as [REQ-HT-012] appear directly in the describe block or test names so they can be surfaced later in reports and PR comments.

Output 02

Edge cases are part of the baseline

Negative values, null project codes, and boundary conditions like exactly 24 hours are expected outputs, not optional cleanup work for QA later.

Output 03

Runnable with minimal wiring

The generated artifact should be a real Jest file or equivalent test asset that can be committed, executed, and reviewed immediately.
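
The deck's generated file is reproduced below. With the validation module in place (a sketch follows the tests), it runs under any standard Jest setup, for example with npx jest timeEntry.test.js.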

timeEntry.test.js — AI-generated
// [REQ-HT-012] Time Entry Validation Tests
// Import assumed: the deck shows only the test body; the module path is inferred from the filename.
const { validate } = require('./timeEntry');

describe('validateTimeEntry', () => {

  it('accepts valid 7.5h entry', () => {
    expect(() => validate({ projectCode: 'A', hours: 7.5 }))
      .not.toThrow();
  });

  it('rejects negative hours', () => {
    expect(() => validate({ projectCode: 'A', hours: -1 }))
      .toThrow('Hours cannot be negative');
  });

  it('rejects daily total > 24h', () => {
    expect(() => validate({ projectCode: 'A', hours: 25 }))
      .toThrow('Daily total cannot exceed 24');
  });

  it('rejects null project code', () => {
    expect(() => validate({ projectCode: null, hours: 8 }))
      .toThrow('Project code required');
  });

  it('allows exactly 24h on one project', () => {
    expect(() => validate({ projectCode: 'A', hours: 24 }))
      .not.toThrow();
  });

});
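
For completeness, a minimal implementation that satisfies those five expectations could look like the sketch below. The deck shows only the generated tests, so the module body and export shape here are assumptions, not workshop material.

timeEntry.js — assumed implementation sketch
// [REQ-HT-012] Time entry validation.
// The generated tests above assert these exact error messages.
function validate({ projectCode, hours }) {
  if (projectCode == null) throw new Error('Project code required');
  if (hours < 0) throw new Error('Hours cannot be negative');
  if (hours > 24) throw new Error('Daily total cannot exceed 24');
}

module.exports = { validate };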

Page takeaway: the workshop treats test generation as structured authoring, not magic. If the prompt contains feature behavior, test scope, edge cases, and requirement tags, the AI can produce artifacts that are meaningful in both engineering review and audit review.

Where this fits in the full pipeline.

Phase 04 — Verification. AI-assisted test generation and PR gates are the verification layer of the SDLC.

AI-Powered SDLC pipeline diagram with Phase 04 — Verification highlighted.
Scroll right to see full pipeline →